119 research outputs found

    A test problem for visual investigation of high-dimensional multi-objective search

    An inherent problem in multiobjective optimization is that the visual observation of solution vectors with four or more objectives is infeasible, which brings major difficulties for algorithmic design, examination, and development. This paper presents a test problem, called the Rectangle problem, to aid the visual investigation of high-dimensional multiobjective search. Key features of the Rectangle problem are that the Pareto optimal solutions 1) lie in a rectangle in the two-variable decision space and 2) are similar (in the sense of Euclidean geometry) to their images in the four-dimensional objective space. In this case, it is easy to examine the behavior of objective vectors in terms of both convergence and diversity, by observing their proximity to the optimal rectangle and their distribution in the rectangle, respectively, in the decision space. Fifteen algorithms are investigated. The underperformance of Pareto-based algorithms as well as most state-of-the-art many-objective algorithms indicates that the proposed problem not only is a good tool to help visually understand the behavior of multiobjective search in a high-dimensional objective space but also can be used as a challenging benchmark function to test algorithms' ability to balance the convergence and diversity of solutions.
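The abstract's core idea, a 2-D decision space whose Pareto set can be inspected visually while the objective space is 4-D, can be sketched as follows. The construction below is an assumption for illustration only (distances to the four vertices of a rectangle), not the paper's exact formulation:

```python
import math

# Hypothetical illustration of the Rectangle problem's idea (this exact
# construction is an assumption, not taken from the paper): each of four
# objectives is the Euclidean distance from a two-variable solution x to
# one vertex of a rectangle, so convergence and diversity can be judged
# by plotting solutions against the rectangle in 2-D.

# Vertices of an illustrative rectangle in the decision space (assumed values).
VERTICES = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]

def rectangle_objectives(x):
    """Map a 2-D decision vector to a 4-D objective vector (minimization)."""
    return [math.dist(x, v) for v in VERTICES]

# The rectangle's center is equidistant from all four vertices.
print(rectangle_objectives((2.0, 1.0)))
```

Under this toy construction, moving a point toward one vertex improves that objective at the expense of the others, which is the trade-off structure a many-objective benchmark needs.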

    A multi-granularity locally optimal prototype-based approach for classification

    Prototype-based approaches generally provide better explainability and are widely used for classification. However, the majority of them suffer from system obesity and lack transparency on complex problems. In this paper, a novel classification approach with a multi-layered system structure self-organized from data is proposed. This approach is able to identify local peaks of multi-modal density derived from static data and select the more representative ones at multiple levels of granularity to act as prototypes. These prototypes are then optimized to their locally optimal positions in the data space and arranged in layers with meaningful dense links in between to form pyramidal hierarchies based on the respective levels of granularity. After being primed offline, the constructed classification model is capable of self-developing continuously from streaming data to self-expand its knowledge base. The proposed approach offers higher transparency and is convenient for visualization thanks to the hierarchical nested architecture. Its system identification process is objective, data-driven, and free from prior assumptions about the data generation model and from user- and problem-specific parameters. Its decision-making process follows the "nearest prototype" principle, and is highly explainable and traceable. Numerical examples on a wide range of benchmark problems demonstrate its high performance.
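The "nearest prototype" decision rule the abstract refers to can be sketched in a few lines. The prototypes and labels below are invented for illustration; in the paper they are identified from the data's multi-modal density at several levels of granularity and then locally optimized:

```python
import math

# Minimal sketch of nearest-prototype classification: a sample takes the
# label of its closest prototype under Euclidean distance. The prototype
# set here is hypothetical, not derived from any dataset.
PROTOTYPES = [((1.0, 1.0), "A"), ((5.0, 5.0), "B"), ((1.0, 5.0), "C")]

def classify(sample):
    """Return the label of the prototype nearest to the sample."""
    point, label = min(PROTOTYPES, key=lambda pl: math.dist(sample, pl[0]))
    return label

print(classify((1.5, 1.2)))  # closest to (1.0, 1.0) -> "A"
```

Because the decision is a distance comparison against explicit, inspectable prototypes, every prediction can be traced back to the prototype that produced it, which is the explainability property the abstract emphasizes.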

    MOEAs Are Stuck in a Different Area at a Time


    How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance

    With modern requirements, there is an increasing tendency of considering multiple objectives/criteria simultaneously in many Software Engineering (SE) scenarios. Such a multi-objective optimization scenario comes with an important issue -- how to evaluate the outcome of optimization algorithms, which typically is a set of incomparable solutions (i.e., being Pareto non-dominated to each other). This issue can be challenging for the SE community, particularly for practitioners of Search-Based SE (SBSE). On one hand, multi-objective optimization could still be relatively new to SE/SBSE researchers, who may not be able to identify the right evaluation methods for their problems. On the other hand, simply following the evaluation methods for general multi-objective optimization problems may not be appropriate for specific SE problems, especially when the problem nature or decision maker's preferences are explicitly/implicitly available. This has been well echoed in the literature by various inappropriate/inadequate selection and inaccurate/misleading use of evaluation methods. In this paper, we first carry out a systematic and critical review of quality evaluation for multi-objective optimization in SBSE. We survey 717 papers published between 2009 and 2019 from 36 venues in seven repositories, and select 95 prominent studies, through which we identify five important but overlooked issues in the area. We then conduct an in-depth analysis of quality evaluation indicators/methods and general situations in SBSE, which, together with the identified issues, enables us to codify a methodological guidance for selecting and using evaluation methods in different SBSE scenarios.
    Comment: This paper has been accepted by IEEE Transactions on Software Engineering, available as full OA: https://ieeexplore.ieee.org/document/925218
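The abstract's central object is a set of mutually non-dominated solutions. A minimal sketch of the Pareto non-dominance relation (assuming minimization in every objective; the function names are my own, not from the paper):

```python
# Pareto dominance for minimization: a dominates b if a is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Keep only the objective vectors no other solution dominates."""
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

# (3, 3) is dominated by (2, 2); the other three are incomparable.
print(non_dominated([(1, 4), (2, 2), (3, 3), (4, 1)]))
```

The output of an optimizer is such an incomparable set, which is precisely why quality indicators (rather than a single scalar score) are needed to compare algorithms, the problem the paper's methodological guidance addresses.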